On Optimal and Fair Service Allocation in Mobile Cloud Computing
This paper studies optimal and fair service allocation for a variety of
mobile applications (single-user, group, and collaborative mobile applications)
in mobile cloud computing. We exploit the observation that using tiered clouds,
i.e., clouds at multiple levels (local and public), can increase the performance
and scalability of mobile applications. We propose a novel framework that models
mobile applications as location-time workflows (LTWs) of tasks, in which users'
mobility patterns are translated into mobile service usage patterns. We show that
an optimal mapping of LTWs to tiered cloud resources considering multiple QoS
goals, such as application delay, device power consumption, and user cost/price,
is an NP-hard problem for both single and group-based applications. We propose an
efficient heuristic algorithm called MuSIC that performs well (73% of
optimal, 30% better than simple strategies) and scales to a large number
of users while ensuring high mobile application QoS. We evaluate MuSIC and the
2-tier mobile cloud approach via implementation on real-world clouds and
extensive simulations using rich mobile applications such as intensive signal
processing, video streaming, and multimedia file sharing. Our
experimental and simulation results indicate that MuSIC supports scalable
operation (100+ concurrent users executing complex workflows) while improving
QoS. We observe about 25% lower delays and power consumption (under fixed price
constraints) and about a 35% decrease in price (under fixed delay) in
comparison to using only the public cloud. Our studies also show that MuSIC
performs well under different mobility patterns, e.g., the random waypoint and
Manhattan models.
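The core decision MuSIC makes, choosing a cloud tier per workflow task under joint delay, power, and price goals, can be illustrated with a toy greedy sketch. Note this is a hypothetical simplification for intuition only: the task names, per-tier cost numbers, and the greedy rule are assumptions, not the actual MuSIC heuristic from the paper.

```python
# Hypothetical sketch: map each task of a location-time workflow (LTW) to the
# cloud tier that minimizes a weighted sum of (delay, power, price).
# All task names and cost values below are illustrative assumptions.

TIERS = ("local_cloud", "public_cloud")

def greedy_map(workflow, weights):
    """For each task, pick the tier with the lowest weighted QoS cost."""
    mapping = {}
    for task, costs in workflow.items():
        mapping[task] = min(
            TIERS,
            key=lambda t: sum(w * c for w, c in zip(weights, costs[t])),
        )
    return mapping

# Per-tier (delay, power, price) estimates for two hypothetical tasks.
workflow = {
    "decode": {"local_cloud": (1.0, 0.5, 2.0), "public_cloud": (3.0, 0.2, 1.0)},
    "render": {"local_cloud": (2.0, 0.8, 3.0), "public_cloud": (1.0, 0.3, 1.0)},
}
weights = (1.0, 1.0, 1.0)  # delay, power, and price weighted equally
print(greedy_map(workflow, weights))
```

A real mapper would also track user location over time and task dependencies, which is what makes the full problem NP-hard.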
FedGen: Generalizable Federated Learning for Sequential Data
Existing federated learning models that follow the standard risk minimization
paradigm of machine learning often fail to generalize in the presence of
spurious correlations in the training data. In many real-world distributed
settings, spurious correlations exist due to biases and data sampling issues on
distributed devices or clients that can erroneously influence models. Current
generalization approaches are designed for centralized training and attempt to
identify features that have an invariant causal relationship with the target,
thereby reducing the effect of spurious features. However, such invariant risk
minimization approaches rely on a priori knowledge of training data
distributions, which is hard to obtain in many applications. In this work, we
present a generalizable federated learning framework called FedGen, which
allows clients to identify and distinguish between spurious and invariant
features in a collaborative manner without prior knowledge of training
distributions. We evaluate our approach on real-world datasets from different
domains and show that FedGen results in models that achieve significantly
better generalization and can outperform the accuracy of current federated
learning approaches by over 24%.
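One way to picture the spurious-vs-invariant distinction FedGen draws across clients is to check whether a feature's relationship with the target is stable across clients. The sketch below is an illustrative simplification, not FedGen's actual algorithm: each client reports the sign of each feature's correlation with the target, and features whose sign disagrees across clients are flagged as potentially spurious.

```python
# Illustrative sketch (NOT the FedGen algorithm): features whose
# feature-target correlation flips sign across clients are treated as
# spurious, since an invariant causal feature should correlate consistently.

def corr_sign(xs, ys):
    """Sign of the sample covariance between xs and ys."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return 1 if cov > 0 else -1 if cov < 0 else 0

def flag_spurious(clients):
    """clients: list of (rows, targets); returns indices of unstable features."""
    n_feat = len(clients[0][0][0])
    spurious = set()
    for j in range(n_feat):
        signs = {corr_sign([row[j] for row in X], y) for X, y in clients}
        if len(signs) > 1:  # correlation direction disagrees across clients
            spurious.add(j)
    return spurious

clients = [
    ([[1, 1], [2, 2], [3, 3]], [1, 2, 3]),  # feature 1 tracks the target here
    ([[1, 3], [2, 2], [3, 1]], [1, 2, 3]),  # ...but flips sign on this client
]
print(flag_spurious(clients))
```

The actual framework learns this distinction collaboratively during training rather than from raw correlation statistics.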
Verifiable Round-Robin Scheme for Smart Homes
Advances in sensing, networking, and actuation technologies have resulted in
the IoT wave that is expected to revolutionize all aspects of modern society.
This paper focuses on the new privacy challenges that arise in IoT in the
context of smart homes. Specifically, it focuses on protecting the
user's privacy against inferences drawn from channel and in-home device activities. We
propose a method for securely scheduling the devices while decoupling
device and channel activities. The proposed solution prevents attacks that
may reveal the coordinated schedule of the devices, and hence ensures
that inferences that may compromise an individual's privacy are not leaked through
device- and channel-level activities. Our experiments also validate the proposed
approach: an adversary cannot infer device and channel
activities by observing the network traffic alone.
Comment: Accepted at the ACM Conference on Data and Application Security and
Privacy (CODASPY), 2019. 12 pages.
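The decoupling idea can be sketched as follows: every device transmits in a fixed round-robin slot, padding with dummy traffic when idle, so observed channel activity is independent of real device activity. This is a minimal toy illustration under assumed names and a naive padding scheme, not the paper's verifiable scheme.

```python
# Hypothetical sketch of round-robin scheduling that decouples channel
# activity from device activity: idle devices still transmit ("dummy"),
# so traffic patterns reveal nothing about which device is actually active.

from itertools import cycle

def schedule(devices, real_events, n_slots):
    """Return (slot, device, payload) triples; idle slots carry padding."""
    out = []
    turn = cycle(devices)
    for slot in range(n_slots):
        dev = next(turn)
        payload = "real" if (slot, dev) in real_events else "dummy"
        out.append((slot, dev, payload))
    return out

# Only the lock actually fires, yet every device transmits exactly twice,
# so an eavesdropper counting packets learns nothing.
trace = schedule(["lock", "thermostat", "camera"], {(0, "lock")}, 6)
print(trace)
```

In the actual scheme, dummy and real payloads would be indistinguishable on the wire (e.g., via encryption and fixed packet sizes).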
FLIPS: Federated Learning using Intelligent Participant Selection
This paper presents the design and implementation of FLIPS, a middleware
system to manage data and participant heterogeneity in federated learning (FL)
training workloads. In particular, we examine the benefits of label
distribution clustering on participant selection in federated learning. FLIPS
clusters parties involved in an FL training job based on the label distribution
of their data a priori, and during FL training, ensures that each cluster is
equitably represented in the participants selected. FLIPS can support the most
common FL algorithms, including FedAvg, FedProx, FedDyn, FedOpt and FedYogi. To
manage platform heterogeneity and dynamic resource availability, FLIPS
incorporates a straggler management mechanism to handle changing capacities in
distributed, smart community applications. Privacy of label distributions,
clustering and participant selection is ensured through a trusted execution
environment (TEE). Our comprehensive empirical evaluation compares FLIPS with
random participant selection, as well as two other "smart" selection
mechanisms, Oort and gradient clustering, using two real-world datasets, two
different non-IID distributions, and three common FL algorithms (FedYogi,
FedProx, and FedAvg). We demonstrate that FLIPS significantly improves
convergence, achieving 17-20% higher accuracy with 20-60% lower communication
costs, and these benefits endure in the presence of straggler participants
- …
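FLIPS's central idea, clustering participants by label distribution and then selecting equitably across clusters, can be sketched with a toy example. This is a deliberate simplification: here clients are grouped only by their dominant label (real clustering would compare full label histograms), and in FLIPS all of this runs inside a TEE to keep label distributions private.

```python
# Illustrative simplification of label-distribution-aware participant
# selection: group clients by their dominant label, then pick one
# participant per group each round so every cluster is represented.

import random
from collections import defaultdict

def cluster_by_dominant_label(label_hists):
    """label_hists: {client: {label: count}} -> {dominant_label: [clients]}."""
    clusters = defaultdict(list)
    for client, hist in label_hists.items():
        clusters[max(hist, key=hist.get)].append(client)
    return clusters

def select_round(clusters, rng):
    """Sample one participant from each cluster for this training round."""
    return [rng.choice(members) for members in clusters.values()]

# Hypothetical non-IID clients: c1 and c2 are cat-heavy, c3 is dog-heavy.
hists = {
    "c1": {"cat": 90, "dog": 10},
    "c2": {"cat": 80, "dog": 20},
    "c3": {"cat": 5, "dog": 95},
}
clusters = cluster_by_dominant_label(hists)
picked = select_round(clusters, random.Random(0))
print(picked)
```

With purely random selection, a round could easily draw only cat-heavy clients; cluster-equitable selection avoids that skew, which is the intuition behind the convergence gains reported above.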